
    Interacting "Through the Display"

    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are ready to hand and have been proposed as interaction devices for external screens. However, only their input mechanisms have been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user’s reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use on various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interacting at variable distances. In this work we propose a new interaction model called through the display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. For each of these prototypes we analyzed its effects on the remaining two criteria – namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that allows the detection of screens purely based on their visual content. Users aim their personal device’s camera at the target display, which then appears in the live video shown in the viewfinder. 
To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke up the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user’s point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses (and thus large fields of view) of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction). And above all, users can interact with external displays at variable distances, regardless of their actual size, without any loss of accuracy.
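The Touch Projector-style interaction described above maps an absolute touch point in the mobile viewfinder onto external display coordinates. A minimal sketch of that mapping, assuming the viewfinder-to-display transform has already been estimated as a 3x3 homography (the function and variable names here are illustrative, not from the original systems):

```python
def apply_homography(H, x, y):
    """Map a viewfinder point (x, y) into display coordinates
    using a 3x3 homography H (row-major nested lists)."""
    # Homogeneous transform: [x', y', w'] = H . [x, y, 1]
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w  # perspective divide

# Hypothetical transform: a uniform 2x scale, i.e. the targeted
# display region covers twice the viewfinder resolution.
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 120, 80))  # (240.0, 160.0)
```

Because the touch is resolved in the display's own coordinate space, pointing accuracy is decoupled from the display's physical size and the user's distance to it.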

    Cost-effectiveness of non-invasive methods for assessment and monitoring of liver fibrosis and cirrhosis in patients with chronic liver disease: systematic review and economic evaluation

    BACKGROUND: Liver biopsy is the reference standard for diagnosing the extent of fibrosis in chronic liver disease; however, it is invasive, with the potential for serious complications. Alternatives to biopsy include non-invasive liver tests (NILTs); however, their cost-effectiveness needs to be established. OBJECTIVE: To assess the diagnostic accuracy and cost-effectiveness of NILTs in patients with chronic liver disease. DATA SOURCES: We searched various databases from 1998 to April 2012, recent conference proceedings and reference lists. METHODS: We included studies that assessed the diagnostic accuracy of NILTs using liver biopsy as the reference standard. Diagnostic studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Meta-analysis was conducted using the bivariate random-effects model with correlation between sensitivity and specificity (whenever possible). Decision models were used to evaluate the cost-effectiveness of the NILTs. Expected costs were estimated using an NHS perspective and health outcomes were measured as quality-adjusted life-years (QALYs). Markov models were developed to estimate long-term costs and QALYs following testing, and antiviral treatment where indicated, for chronic hepatitis B (HBV) and chronic hepatitis C (HCV). NILTs were compared with each other, sequential testing strategies, biopsy and strategies including no testing. For alcoholic liver disease (ALD), we assessed the cost-effectiveness of NILTs in the context of potentially increasing abstinence from alcohol. Owing to a lack of data and treatments specifically for fibrosis in patients with non-alcoholic fatty liver disease (NAFLD), the analysis was limited to an incremental cost per correct diagnosis. An analysis of NILTs to identify patients with cirrhosis for increased monitoring was also conducted. 
RESULTS: Given a cost-effectiveness threshold of £20,000 per QALY, treating everyone with HCV without prior testing was cost-effective with an incremental cost-effectiveness ratio (ICER) of £9204. This was robust in most sensitivity analyses but sensitive to the extent of treatment benefit for patients with mild fibrosis. For HBV [hepatitis B e antigen (HBeAg)-negative] this strategy had an ICER of £28,137, which was cost-effective only if the upper bound of the standard UK cost-effectiveness threshold range (£30,000) is acceptable. For HBeAg-positive disease, two NILTs applied sequentially (hyaluronic acid and magnetic resonance elastography) were cost-effective at a £20,000 threshold (ICER: £19,612); however, the results were highly uncertain, with several test strategies having similar expected outcomes and costs. For patients with ALD, liver biopsy was the cost-effective strategy, with an ICER of £822. LIMITATIONS: A substantial number of tests had only one study from which diagnostic accuracy was derived; therefore, there is a high risk of bias. Most NILTs did not have validated cut-offs for diagnosis of specific fibrosis stages. The findings of the ALD model were dependent on assumptions about abstinence rates, and the modelling approach for NAFLD was hindered by the lack of evidence on clinically effective treatments. CONCLUSIONS: Treating everyone without NILTs is cost-effective for patients with HCV, but for HBeAg-negative HBV only if the higher cost-effectiveness threshold is appropriate. For HBeAg-positive disease, two NILTs applied sequentially were cost-effective but highly uncertain. Further evidence for treatment effectiveness is required for ALD and NAFLD. STUDY REGISTRATION: This study is registered as PROSPERO CRD42011001561. FUNDING: The National Institute for Health Research Health Technology Assessment programme
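The ICER comparisons above all follow the standard incremental cost-effectiveness calculation: the extra cost of a strategy divided by the extra QALYs it yields, judged against a willingness-to-pay threshold. A minimal sketch with purely illustrative numbers (not the report's model inputs):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, threshold=20_000):
    """Accept a strategy whose ICER is at or below the threshold (GBP/QALY)."""
    return icer_value <= threshold

# Hypothetical comparison: treat-all vs. a no-testing comparator.
ratio = icer(cost_new=30_000, qaly_new=12.5, cost_old=7_000, qaly_old=10.0)
print(ratio)                  # 9200.0 (GBP per QALY gained)
print(cost_effective(ratio))  # True at the 20,000 threshold
```

The sensitivity of the HBeAg-negative result to the £20,000 vs £30,000 threshold corresponds to changing the `threshold` argument here.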

    Proceedings of the 29th EG-ICE International Workshop on Intelligent Computing in Engineering

    This publication is the Proceedings of the 29th EG-ICE International Workshop on Intelligent Computing in Engineering from July 6-8, 2022. The EG-ICE International Workshop on Intelligent Computing in Engineering brings together international experts working on the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolution of challenges such as supporting multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, search in multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.

    The Haptic Touch toolkit : enabling exploration of haptic interactions

    This work is partially funded by the AITF/NSERC/SMART Chair in Interactive Technologies, Alberta Innovates Tech. Futures, NSERC, and SMART Technologies. In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the Haptictouch toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time.
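The toolkit's middle layer combines haptic behaviors such as softness and oscillation into a single actuator output. A hedged sketch of that idea (all names hypothetical, not the actual Haptictouch API), modeling behaviors as functions of time whose values are summed and clamped to the actuator range:

```python
import math

def softness(stiffness):
    """Constant resistance: softer surfaces push back less."""
    return lambda t: stiffness

def oscillation(amplitude, hz):
    """Periodic up/down movement, e.g. for a pulsing texture."""
    return lambda t: amplitude * math.sin(2 * math.pi * hz * t)

def combine(*behaviors):
    """Sum the behaviors and clamp to the actuator's 0..1 output range."""
    def combined(t):
        value = sum(behavior(t) for behavior in behaviors)
        return max(0.0, min(1.0, value))
    return combined

# A soft region that gently pulses once per second.
feel = combine(softness(0.4), oscillation(0.2, hz=1.0))
print(round(feel(0.25), 2))  # sine peak at t=0.25s: 0.4 + 0.2 = 0.6
```

Keeping each behavior a plain function of time is what makes them freely combinable, which is the property the layered design is built around.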
